
    A comparative evaluation of nonlinear dynamics methods for time series prediction

    A key problem in time series prediction using autoregressive models is fixing the model order, namely the number of past samples required to model the time series adequately. Estimating the model order by cross-validation can be a long process. In this paper, we investigate alternatives to cross-validation based on nonlinear dynamics methods, namely the Grassberger-Procaccia, Kégl, Levina-Bickel and False Nearest Neighbors algorithms. The experiments were performed in two ways. In the first, the estimated model order was used to carry out prediction with a support vector machine for regression on three real-data time series, showing that the nonlinear dynamics methods perform very close to cross-validation. In the second, we tested how accurately the nonlinear dynamics methods recover the known model order of synthetic time series. Here most of the methods yielded a correct estimate, and when the estimate was not correct, it was very close to the true one.
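
    The False Nearest Neighbors idea mentioned above can be pictured with a short sketch: embed the series at increasing order d and count how many nearest neighbours stop being close when the embedding is extended to d + 1; the smallest d with a (near-)zero false-neighbour fraction is a candidate model order. The threshold, data and function names below are illustrative, not the paper's code:

```python
import numpy as np

def delay_embed(x, d):
    """Stack d consecutive samples as rows: [x[t], x[t+1], ..., x[t+d-1]]."""
    n = len(x) - d + 1
    return np.column_stack([x[i:i + n] for i in range(d)])

def false_nearest_fraction(x, d, rtol=10.0):
    """Fraction of nearest neighbours in dimension d that become 'false'
    (much farther apart) when the embedding is extended to d + 1."""
    emb = delay_embed(x, d)
    ext = delay_embed(x, d + 1)
    m = len(ext)                      # points present in both embeddings
    emb = emb[:m]
    false = 0
    for i in range(m):
        dist = np.linalg.norm(emb - emb[i], axis=1)
        dist[i] = np.inf              # exclude the point itself
        j = np.argmin(dist)
        extra = abs(ext[i, -1] - ext[j, -1])   # growth along the new coordinate
        if dist[j] > 0 and extra / dist[j] > rtol:
            false += 1
    return false / m

# Pick the smallest order whose false-neighbour fraction is (near) zero.
x = np.sin(0.3 * np.arange(500)) + 0.05 * np.random.randn(500)
for d in range(1, 8):
    print(d, round(false_nearest_fraction(x, d), 3))
```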

    Quantum phase transitions in fully connected spin models: an entanglement perspective

    We consider a set of fully connected spin models that display first- or second-order transitions and for which we compute the ground-state entanglement in the thermodynamic limit. We analyze several entanglement measures (concurrence, Rényi entropy, and negativity) and show that, in general, discontinuous transitions lead to a jump in these quantities at the transition point. Interestingly, we also find examples where this is not the case.
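
    For a two-qubit reduced state, one of the measures listed (the concurrence) has a closed form due to Wootters, which the following sketch evaluates; it is a generic illustration, not tied to the paper's specific spin models:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4, trace 1)."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy             # spin-flipped state
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals)))[::-1]  # square roots, descending order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Example: the Bell state (|00> + |11>)/sqrt(2) is maximally entangled (concurrence 1).
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(concurrence(np.outer(psi, psi.conj())))
```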

    ODE parameter inference using adaptive gradient matching with Gaussian processes

    Parameter inference in mechanistic models based on systems of coupled differential equations is a topical yet computationally challenging problem, due to the need to follow each parameter adaptation with a numerical integration of the differential equations. Techniques based on gradient matching, which aim to minimize the discrepancy between the slope of a data interpolant and the derivatives predicted from the differential equations, offer a computationally appealing shortcut to the inference problem. The present paper discusses a method based on nonparametric Bayesian statistics with Gaussian processes due to Calderhead et al. (2008), and shows how inference in this model can be substantially improved by consistently inferring all parameters from the joint distribution. We demonstrate the efficiency of our adaptive gradient matching technique on three benchmark systems, and perform a detailed comparison with the method in Calderhead et al. (2008) and the explicit ODE integration approach, both in terms of parameter inference accuracy and in terms of computational efficiency.
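
    As a rough illustration of the gradient-matching shortcut (not the Calderhead et al. sampler itself), the sketch below smooths noisy observations of a toy logistic-growth ODE with an RBF-kernel GP, differentiates the GP mean analytically, and fits the ODE parameters by minimizing the mismatch between the interpolant's slope and the ODE right-hand side; the toy system and all hyperparameter values are our own assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: logistic growth dx/dt = r*x*(1 - x/K), observed with noise.
np.random.seed(0)
r_true, K_true, x0 = 0.8, 10.0, 0.5
t = np.linspace(0, 10, 30)
x = K_true / (1 + (K_true / x0 - 1) * np.exp(-r_true * t))
y = x + 0.1 * np.random.randn(len(t))

# RBF-kernel GP interpolant and its time derivative at the observation points.
ell, sig2, noise = 1.0, 25.0, 0.01
D = t[:, None] - t[None, :]
K = sig2 * np.exp(-0.5 * (D / ell) ** 2)
alpha = np.linalg.solve(K + noise * np.eye(len(t)), y)
m = K @ alpha                             # smoothed state estimate
dm = (K * (-D / ell ** 2)) @ alpha        # analytic d/dt of the GP posterior mean

# Gradient matching: choose (r, K) so the ODE slope matches the GP slope.
def mismatch(theta):
    r, Kc = theta
    return np.sum((dm - r * m * (1 - m / Kc)) ** 2)

# No ODE solver in the loop; typically lands near the true values [0.8, 10].
print(minimize(mismatch, x0=[0.3, 5.0], method="Nelder-Mead").x)
```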

    Relative Comparison Kernel Learning with Auxiliary Kernels

    In this work we consider the problem of learning a positive semidefinite kernel matrix from relative comparisons of the form "object A is more similar to object B than it is to C", where the comparisons are given by humans. Existing solutions to this problem assume that many comparisons are provided to learn a high-quality kernel. However, this is unrealistic for many real-world tasks, since relative assessments require human input, which is often costly or difficult to obtain; only a limited number of such comparisons may be provided. In this work, we explore methods for aiding the process of learning a kernel with the help of auxiliary kernels built from more easily extractable information about the relationships among objects. We propose a new kernel learning approach in which the target kernel is defined as a conic combination of auxiliary kernels and a kernel whose elements are learned directly. We formulate a convex optimization to solve for this target kernel that adds only minor overhead to methods that use no auxiliary information. Empirical results show that, given few training relative comparisons, our method learns kernels that generalize to more out-of-sample comparisons than methods that do not utilize auxiliary information, as well as similar methods that learn metrics over objects.
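
    One possible reading of the conic-combination construction is sketched below with cvxpy: the target kernel is a nonnegative mixture of two auxiliary kernels plus a directly learned PSD part, fitted with hinge losses on triplet comparisons. The object counts, kernels, triplets and regularization weights are invented for illustration; this is not the authors' exact formulation:

```python
import numpy as np
import cvxpy as cp

# Illustrative setup: n objects, two auxiliary kernels, a few human triplets
# (a, b, c) meaning "a is more similar to b than to c".
n = 8
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(n, 3)), rng.normal(size=(n, 2))
A1, A2 = X1 @ X1.T, X2 @ X2.T            # auxiliary kernels (PSD by construction)
triplets = [(0, 1, 2), (3, 4, 5), (6, 7, 0)]

mu = cp.Variable(2, nonneg=True)          # conic-combination weights
E = cp.Variable((n, n), PSD=True)         # directly learned PSD part
K = mu[0] * A1 + mu[1] * A2 + E

def sqdist(K, i, j):                      # kernel-induced squared distance
    return K[i, i] + K[j, j] - 2 * K[i, j]

# Hinge loss per comparison: want dist(a, c) >= dist(a, b) + margin of 1.
loss = sum(cp.pos(1 + sqdist(K, a, b) - sqdist(K, a, c)) for a, b, c in triplets)
prob = cp.Problem(cp.Minimize(loss + 0.1 * cp.trace(E) + 0.1 * cp.sum(mu)))
prob.solve()
print("weights:", mu.value, "objective:", prob.value)
```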

    Cavallo's Multiplier for in situ Generation of High Voltage

    A classic electrostatic induction machine, Cavallo's multiplier, is suggested for in situ production of very high voltage in cryogenic environments. The device is suitable for generating a large electrostatic field under conditions of very small load current. Operation of the Cavallo multiplier is analyzed, with a quantitative description in terms of the mutual capacitances between electrodes in the system. A demonstration apparatus was constructed, and measured voltages are compared to predictions based on measured capacitances in the system. The simplicity of the Cavallo multiplier makes it amenable to electrostatic analysis using finite element software, and electrode shapes can be optimized to take advantage of a high dielectric strength medium such as liquid helium. A design study is presented for a Cavallo multiplier in a large-scale cryogenic experiment to measure the neutron electric dipole moment.
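
    The voltage build-up in an induction multiplier of this kind can be caricatured with a one-line recurrence: each cycle adds a charge increment set by a capacitance ratio, while the collector's own rising potential erodes the net gain, so the output approaches a saturation voltage geometrically. The two-parameter model and numbers below are our simplification, not the paper's mutual-capacitance analysis:

```python
# Toy charging curve for an induction multiplier (illustrative only).
c_transfer = 0.05    # assumed fraction of source potential gained per cycle
c_feedback = 0.01    # assumed fractional loss per cycle from the collector's own field
v_source, v = 1.0e3, 0.0
for cycle in range(1, 501):
    v = (1 - c_feedback) * v + c_transfer * v_source
    if cycle % 100 == 0:
        # approaches c_transfer / c_feedback * v_source = 5 kV in this toy model
        print(cycle, round(v, 1))
```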

    Parameter inference in mechanistic models of cellular regulation and signalling pathways using gradient matching

    A challenging problem in systems biology is parameter inference in mechanistic models of signalling pathways. In the present article, we investigate an approach based on gradient matching and nonparametric Bayesian modelling with Gaussian processes. We evaluate the method on two biological systems: the regulation of PIF4/5 in Arabidopsis thaliana, and the JAK/STAT signal transduction pathway.

    An introduction to the Generalized Parton Distributions

    The concepts of Generalized Parton Distributions (GPDs) are reviewed in an introductory and phenomenological fashion. These distributions provide a rich and unifying picture of nucleon structure. Their physical meaning is discussed. The GPDs are in principle measurable through exclusive deeply virtual production of photons (DVCS) or of mesons (DVMP). Experiments are starting to test the validity of these concepts. First results are discussed and new experimental projects presented, with an emphasis on this program at Jefferson Lab. (Proceedings of the Int. Conf. on Quark Nuclear Physics, QNP2002; to be published in Eur. Phys. Jour.)

    Robustness Analysis for Terminal Phases of Re-entry Flight

    This work reports advancements in the current practice of robustness analysis for flight control system (FCS) design refinement, introducing a method that accounts for the nonlinear effects of multiple uncertainties over the whole trajectory and that can be applied before robustness is finally assessed with Monte Carlo (MC) analysis. Current practice in FCS robustness analysis for this kind of application relies mainly on the theory of linear time-invariant (LTI) systems. The proposed method delivers feedback on the causes of requirement violations and adopts robustness criteria directly linked to the original mission or system requirements, such as those employed in MC analyses. The nonlinear robustness criterion proposed in the present work is based on the concepts of practical stability and/or finite-time stability. The practical stability property improves the accuracy of the robustness evaluation with respect to frozen-time approaches, thus reducing the risk of discovering additional effects during robustness verification with Monte Carlo techniques.
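
    The finite-time (practical) stability check can be pictured as follows: sample the uncertain parameters, propagate the closed-loop trajectory over the time window of interest, and count the samples in which the state leaves a prescribed bound tied to a mission requirement. The toy second-order plant, bounds and sample count below are our own illustration, not the paper's re-entry flight model:

```python
import numpy as np

# Toy finite-time-stability check: flag a sampled plant as a requirement
# violation if its position response to a unit velocity perturbation leaves
# the prescribed bound within the finite horizon.
rng = np.random.default_rng(1)
dt, horizon, bound = 0.01, 5.0, 0.5
n_samples, violations = 1000, 0
for _ in range(n_samples):
    wn = rng.uniform(1.5, 3.0)               # uncertain natural frequency
    zeta = rng.uniform(0.05, 0.4)            # uncertain damping ratio
    A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
    x = np.array([0.0, 1.0])                 # [position, velocity] perturbation
    for _ in range(int(horizon / dt)):
        x = x + dt * (A @ x)                 # forward-Euler propagation
        if abs(x[0]) > bound:
            violations += 1
            break
print("estimated violation probability:", violations / n_samples)
```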

    MCMC for variationally sparse Gaussian processes

    Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has gone into attacking three issues with GP models: how to compute efficiently when the number of data is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper addresses all three simultaneously, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs. Code to replicate each experiment in this paper will be available shortly. JH was funded by an MRC fellowship, AM and ZG by EPSRC grant EP/I036575/1 and a Google Focussed Research award.
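
    A stripped-down version of the idea (MCMC over the inducing values of a sparse GP under a non-Gaussian likelihood) is sketched below. It uses random-walk Metropolis as a simple stand-in for the paper's Hybrid Monte Carlo, drops the conditional variance of f given u to stay short, and the data, kernel settings and step size are illustrative assumptions rather than the authors' setup:

```python
import numpy as np

# Toy 1-D binary classification with a sparse GP: inducing values u have a
# N(0, Kzz) prior, f is mapped deterministically as f = Knz Kzz^{-1} u, and the
# non-Gaussian (Bernoulli) likelihood is handled by sampling u with MCMC.
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 60)[:, None]
y = (X[:, 0] > 0).astype(float)                     # toy labels
Z = np.linspace(-3, 3, 8)[:, None]                  # inducing inputs

def rbf(A, B, ell=1.0, sig2=1.0):
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return sig2 * np.exp(-0.5 * d2 / ell ** 2)

Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
Knz = rbf(X, Z)
L = np.linalg.cholesky(Kzz)

def log_post(u):
    f = Knz @ np.linalg.solve(Kzz, u)               # mean of f given u
    loglik = np.sum(y * f - np.log1p(np.exp(f)))    # Bernoulli with logit link
    v = np.linalg.solve(L, u)
    return loglik - 0.5 * v @ v                     # plus N(0, Kzz) log prior

u = np.zeros(len(Z))
lp, samples = log_post(u), []
for it in range(5000):
    prop = u + 0.1 * rng.normal(size=len(Z))        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        u, lp = prop, lp_prop
    if it % 10 == 0:
        samples.append(u.copy())
print("posterior mean of inducing values:", np.mean(samples, axis=0).round(2))
```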